Face Recognition Using Kernel Eigenfaces
Authors
Abstract
Eigenface or Principal Component Analysis (PCA) methods have demonstrated their success in face recognition, detection, and tracking. The representation in PCA is based on the second order statistics of the image set, and does not address higher order statistical dependencies such as the relationships among three or more pixels. Recently, Higher Order Statistics (HOS) have been used as a more informative low dimensional representation than PCA for face and vehicle detection. In this paper we investigate a generalization of PCA, Kernel Principal Component Analysis (Kernel PCA), for learning low dimensional representations in the context of face recognition. In contrast to HOS, Kernel PCA computes the higher order statistics without the combinatorial explosion of time and memory complexity. While PCA aims to find second order correlations of patterns, Kernel PCA provides a replacement which takes higher order correlations into account. We compare the recognition results of kernel methods with Eigenface methods on two benchmarks. Empirical results show that Kernel PCA outperforms the Eigenface method in face recognition.

1. MOTIVATION AND APPROACH

Subspace methods have been applied successfully in applications such as face recognition using Eigenfaces (or PCA face) [11] [5], face detection [5], object recognition [6], and tracking [1]. Representations such as PCA encode the pattern information based on second order dependencies, i.e., pixelwise covariance among the pixels, and are insensitive to dependencies among multiple (more than two) pixels in the patterns. Since the eigenvectors in PCA form an orthonormal basis, the principal components are uncorrelated. In other words, the coefficients for one of the axes cannot be linearly predicted from the coefficients of the other axes. Higher order dependencies in an image include nonlinear relations among the pixel intensity values, such as the relationships among three or more pixels in an edge or a curve, which can capture important information for recognition. Several researchers have conjectured that higher order statistics may be crucial to better represent complex patterns.

Recently, Higher Order Statistics (HOS) have been applied to visual learning problems. Rajagopalan et al. use HOS of the images of a target object to get a better approximation of an unknown distribution. Experiments on face detection [7] and vehicle detection [8] show comparable, if not better, results than other PCA-based methods. HOS usually works by projecting the input patterns to a higher dimensional space R^F before computing the cumulants. The k-th order cumulant is defined in terms of the joint moments of order up to k. For zero-mean random variables x_1, x_2, x_3, x_4, the second, third, and fourth order cumulants are given by

cum(x_1, x_2) = E[x_1 x_2]
cum(x_1, x_2, x_3) = E[x_1 x_2 x_3]
cum(x_1, x_2, x_3, x_4) = E[x_1 x_2 x_3 x_4] - E[x_1 x_2] E[x_3 x_4] - E[x_1 x_3] E[x_2 x_4] - E[x_1 x_4] E[x_2 x_3]

Note that the computation involved in HOS depends on the order of the cumulants and is usually heavy, because it requires computing expectations in a high dimensional space.
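To make the cumulant definitions above concrete, here is a minimal numpy sketch that estimates the second-, third-, and fourth-order sample cumulants; the random zero-mean data and the function name `cumulants` are illustrative stand-ins for actual pixel values, not part of the original paper.

```python
import numpy as np

def cumulants(x1, x2, x3, x4):
    """Sample estimates of the 2nd-, 3rd-, and 4th-order cumulants
    for zero-mean random variables, following the formulas above."""
    E = lambda a: float(np.mean(a))        # expectation over samples
    cum2 = E(x1 * x2)
    cum3 = E(x1 * x2 * x3)
    cum4 = (E(x1 * x2 * x3 * x4)
            - E(x1 * x2) * E(x3 * x4)
            - E(x1 * x3) * E(x2 * x4)
            - E(x1 * x4) * E(x2 * x3))
    return cum2, cum3, cum4

# Illustrative usage: four zero-mean sample vectors standing in for pixels.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 10000))
x -= x.mean(axis=1, keepdims=True)
print(cumulants(x[0], x[1], x[2], x[3]))
```

Already at fourth order, the estimate needs three extra products of pairwise expectations; this combinatorial growth with the cumulant order is exactly the cost that the kernel formulation below avoids.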
In contrast to computing cumulants in HOS, we seek a formulation which computes the higher order statistics using only dot products, Φ(x_i)·Φ(x_j), of the training patterns, where Φ is a nonlinear projection function. Since we can compute these dot products efficiently, we can solve the original problem without explicitly mapping to R^F. This is achieved using Mercer kernels, where a kernel k(x_i, x_j) computes the dot product in some feature space R^F, i.e., k(x_i, x_j) = Φ(x_i)·Φ(x_j). The idea of using kernel methods has also been adopted in Support Vector Machines (SVMs), in which kernel functions replace the nonlinear projection functions so that an optimal separating hyperplane can be constructed efficiently [2]. Scholkopf et al. proposed the use of Kernel PCA for object recognition, in which the principal components of an object image comprise a feature vector to train an SVM [10]. Empirical results on character recognition using the MNIST data set and object recognition using the MPI chair database demonstrate the effectiveness of Kernel PCA features.
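As a rough illustration of how the dot-product formulation works in practice, the following numpy sketch performs Kernel PCA with a polynomial kernel: the kernel matrix is built from dot products alone, centered in feature space, and diagonalized, so the nonlinear map Φ is never formed explicitly. The kernel degree, component count, and random data are illustrative assumptions, not the settings used in the paper.

```python
import numpy as np

def kernel_pca(X, degree=2, n_components=10):
    """Kernel PCA: diagonalize the centered kernel matrix instead of the
    covariance matrix, so only dot products of the patterns are needed."""
    n = X.shape[0]
    K = (X @ X.T + 1.0) ** degree                 # polynomial kernel k(xi, xj)
    one = np.ones((n, n)) / n
    Kc = K - one @ K - K @ one + one @ K @ one    # center in feature space
    vals, vecs = np.linalg.eigh(Kc)               # ascending eigenvalues
    idx = np.argsort(vals)[::-1][:n_components]   # keep the largest ones
    vals, vecs = vals[idx], vecs[:, idx]
    alphas = vecs / np.sqrt(np.maximum(vals, 1e-12))  # unit-norm feature axes
    return Kc @ alphas                            # projections of training points

# Illustrative usage: 40 flattened "images" of 64 pixels each.
X = np.random.default_rng(1).standard_normal((40, 64))
Z = kernel_pca(X)
print(Z.shape)   # (40, 10)
```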
Similar Resources
Face Recognition using Eigenfaces, PCA and Support Vector Machines
This paper is based on a combination of principal component analysis (PCA), eigenfaces and support vector machines. Using the N-fold method and with respect to the value of N, each person's face images are divided into two sections. As a result, vectors of training features and test features are obtained. Classification precision and accuracy were examined with three different types of kernel and...
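For readers who want to try this kind of pipeline, here is a hedged scikit-learn sketch of PCA features fed to an SVM and scored with N-fold cross-validation; the synthetic data, five folds, component count, and kernel list are placeholder choices, not the settings of the paper above.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

# Hypothetical data: rows are flattened face images, y holds person ids.
rng = np.random.default_rng(2)
X, y = rng.standard_normal((100, 256)), rng.integers(0, 5, 100)

# One pipeline per kernel type, scored with N-fold cross-validation.
for kernel in ("linear", "poly", "rbf"):
    clf = make_pipeline(PCA(n_components=20), SVC(kernel=kernel))
    scores = cross_val_score(clf, X, y, cv=5)   # N = 5 folds
    print(kernel, scores.mean())
```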
Recognizing Faces using Kernel Eigenfaces and Support Vector Machines
In face recognition, Principal Component Analysis (PCA) is often used to extract a low dimensional face representation based on the eigenvectors of the face image autocorrelation matrix. Kernel Principal Component Analysis (Kernel PCA) has recently been proposed as a non-linear extension of PCA. While PCA is able to discover and represent linearly embedded manifolds, Kernel PCA can extract low d...
Eigenfaces and Support Vector Machine Approaches for Hybrid Face Recognition
Face recognition plays a crucial role in daily life, especially for security purposes, and is actively used in many applications. In this study, we introduce a hybrid face recognition technique consisting of two main parts, namely feature extraction and classification. In the first part, as the feature extraction technique, we benefit from the Eigenfaces method, which is based on Princ...
Performance Evaluation of Face Recognition Using PCA
The face recognition problem is made difficult by large variations in facial expression, head rotation and tilt, lighting intensity and angle, aging, partial occlusion (e.g., wearing hats, scarves, glasses, etc.), and so on. The Eigenfaces algorithm has long been a mainstay in the field of face recognition, but the face space has high dimension. Principal components from the face space are used for face reco...
PCA and kernel PCA using polynomial filtering: a case study on face recognition
Principal component analysis (PCA) is an extensively used dimensionality reduction technique, with important applications in many fields such as pattern recognition, computer vision and statistics. It employs the eigenvectors of the covariance matrix of the data to project it on a lower dimensional subspace. Kernel PCA, a generalized version of PCA, performs PCA implicitly in a nonlinearly tran...
Summary: "Eigenfaces for Recognition" (M. Turk, A. Pentland)
"Eigenfaces for Recognition" seeks to implement a system capable of efficient, simple, and accurate face recognition in a constrained environment (such as a household or an office). The system does not depend on 3-D models or intuitive knowledge of the structure of the face (eyes, nose, mouth). Classification is instead performed using a linear combination of characteristic features (eigenfaces...
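A compact numpy sketch of the eigenface scheme this summary describes, assuming flattened grayscale images as rows: eigenfaces come from the eigenvectors of the image covariance (computed via the small n-by-n trick of Turk and Pentland), and a probe is classified by nearest neighbor in coefficient space. The random data, component count, and helper names are illustrative, not from the paper.

```python
import numpy as np

def eigenfaces(train, n_components=10):
    """Build eigenfaces from flattened training images (one per row)."""
    mean = train.mean(axis=0)
    A = train - mean
    # Eigenvectors of the small n x n matrix A A^T, lifted back to pixel
    # space (the Turk-Pentland trick for n images << number of pixels).
    vals, vecs = np.linalg.eigh(A @ A.T)
    order = np.argsort(vals)[::-1][:n_components]
    U = A.T @ vecs[:, order]
    U /= np.linalg.norm(U, axis=0)          # unit-norm eigenfaces
    return mean, U

def classify(probe, mean, U, train_coeffs, labels):
    """Nearest neighbor in eigenface coefficient space."""
    w = (probe - mean) @ U
    return labels[np.argmin(np.linalg.norm(train_coeffs - w, axis=1))]

# Illustrative usage with random stand-ins for 20 face images of 400 pixels.
rng = np.random.default_rng(3)
train, labels = rng.standard_normal((20, 400)), np.arange(20)
mean, U = eigenfaces(train)
coeffs = (train - mean) @ U
print(classify(train[7], mean, U, coeffs, labels))   # -> 7
```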